
    Statistical Hardware Design With Multi-model Active Learning

    With the rising complexity of the novel applications that serve modern society comes a strong need to design efficient computing platforms. Designing efficient hardware is, however, a complex multi-objective problem involving many parameters and their interactions. Given the large number of parameters and objectives in hardware design, synthesizing all possible combinations is not a feasible way to find the optimal solution. One promising approach is statistical modeling of the desired hardware performance. Here, we propose a model-based active learning approach to solve this problem. Our method uses Bayesian models to characterize various aspects of hardware performance, together with transfer learning and Gaussian regression bootstrapping in conjunction with active learning to create more accurate models. The resulting statistical models are accurate enough to perform design space exploration and performance prediction simultaneously. We apply the method to various hardware setups, such as micro-architecture design and OpenCL kernels for FPGA targets. Our experiments show that the number of samples required to create performance models is significantly reduced while the predictive power of the models is maintained. For instance, in our performance prediction setting, the proposed method needs 65% fewer samples to create the model, and in the design space exploration setting, it can find the best parameter settings by exploring fewer than 50 samples.
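    The model-based active-learning loop this abstract describes can be sketched in miniature. This is an illustration, not the authors' implementation: the RBF kernel, the toy one-parameter "latency" objective, and the maximum-predictive-variance acquisition rule are all assumptions chosen for the example.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    # Squared-exponential kernel between row vectors in a and b.
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-np.sum(d**2, axis=-1) / (2 * length**2))

def gp_posterior(X, y, Xq, noise=1e-6):
    # Standard Gaussian-process regression: posterior mean/variance at Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.diag(rbf_kernel(Xq, Xq) - Ks @ Kinv @ Ks.T)
    return mu, var

def latency(x):
    # Hypothetical hardware objective over one design parameter.
    return np.sin(3 * x) + 0.5 * x

# Candidate design space (e.g. one tunable parameter scaled to [0, 2]).
grid = np.linspace(0, 2, 50)[:, None]
X = grid[[0, -1]]                      # start from the two corner designs
y = latency(X[:, 0])

for _ in range(8):
    mu, var = gp_posterior(X, y, grid)
    nxt = grid[[np.argmax(var)]]       # synthesize where the model is least certain
    X = np.vstack([X, nxt])
    y = np.append(y, latency(nxt[0, 0]))

mu, _ = gp_posterior(X, y, grid)
best = grid[np.argmin(mu), 0]          # predicted best design point
```

    The point of the sketch is the sample budget: ten evaluations of the objective suffice to both rank all fifty candidate designs (performance prediction) and pick the best one (design space exploration), mirroring the "fewer than 50 samples" claim in spirit.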

    Knowledge Management, as the Key Factor of Survival in New Competition Age

    In accordance with Darwin's theory of the survival of the fittest, whatever adapts best to its circumstances survives, and whatever cannot adapt dies out. If we view the organization as a biological organism, it is essentially a special kind of group that tries to survive and thrive in a particular environment; after all, the organization is something humans created for their own survival. The greatest difference between humans and other species, the one that has enabled them to survive, progress, and develop over the ages, is the capacity for thought, which other species possess only in small measure or not at all. History shows that many species apparently bigger and stronger than humans, such as the dinosaurs, died out because they could not adapt to their circumstances, while humans, this apparently weak species, continue to live by virtue of thinking and decision-making. An organization is made of people, and in a knowledge-based economy the most important factor that helps it survive, consolidate, and surpass its competitors is thinking: using the powerful collective mind of the organization. Many years after the Industrial Revolution, with its three production factors of land, capital, and labor, we now live in the century of information and organized R&D, in which knowledge and its management have become the most important factors of production; the organizations that make better use of these two factors are more successful and enduring.
    Creating and then continuously improving knowledge management in the organization, as the process that builds the "mind of the organization," is therefore inevitable. In this process, not only is the knowledge of the organization's members gathered (and, by synergy, the total knowledge of a group exceeds the sum of its members' individual knowledge), but data and information from the environment, especially from customers, also enter constantly as fuel for the organization's mind and develop it further. The result is performance grounded in the organization's internal and external circumstances and, finally, the survival of the organization as the fittest. This paper considers how to create and improve this basis of organizational survival, the mind of the organization, both technologically and socially.

    Mobile Robotics, Moving Intelligence


    Is Integer Arithmetic Enough for Deep Learning Training?

    The ever-increasing computational complexity of deep learning models makes their training and deployment difficult on various cloud and edge platforms. Replacing floating-point arithmetic with low-bit integer arithmetic is a promising approach to save energy, memory footprint, and latency of deep learning models. As such, quantization has attracted the attention of researchers in recent years. However, using integer numbers to form a fully functional integer training pipeline, including the forward pass, back-propagation, and stochastic gradient descent, has not been studied in detail. Our empirical and mathematical results reveal that integer arithmetic is enough to train deep learning models. Unlike recent proposals, instead of quantization, we directly switch the number representation of the computations. Our novel training method forms a fully integer training pipeline that does not change the trajectory of the loss and accuracy compared to floating point, nor does it need any special hyper-parameter tuning, distribution adjustment, or gradient clipping. Our experimental results show that the proposed method is effective in a wide variety of tasks such as classification (including vision transformers), object detection, and semantic segmentation.
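    The idea of switching the number representation, rather than quantizing a float pipeline, can be illustrated with a fixed-point linear layer where both the forward matmul and the weight-gradient matmul run in integer arithmetic. This is a minimal sketch under assumed conventions (int64 accumulators, 8 fractional bits), not the paper's actual number format.

```python
import numpy as np

SHIFT = 8                      # assumed number of fixed-point fractional bits
SCALE = 1 << SHIFT

def to_fixed(x):
    # Quantize a float array to integer fixed point.
    return np.round(x * SCALE).astype(np.int64)

def fixed_matmul(a_q, b_q):
    # Integer matmul; the product carries 2*SHIFT fractional bits,
    # so shift back down to SHIFT fractional bits.
    return (a_q @ b_q) >> SHIFT

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))   # activations
W = rng.standard_normal((3, 2))   # weights

# Forward pass of a linear layer y = x @ W, entirely in integers.
x_q, W_q = to_fixed(x), to_fixed(W)
y_q = fixed_matmul(x_q, W_q)

# Backward pass: dL/dW = x^T @ dL/dy, also in integers.
gy = rng.standard_normal((4, 2))  # stand-in upstream gradient
gW_q = fixed_matmul(x_q.T, to_fixed(gy))

# The integer result tracks the float reference up to quantization error.
err = np.abs(y_q / SCALE - (x @ W)).max()
```

    The right shift after each matmul is what keeps values in a fixed range; a full training pipeline would apply the same pattern to every layer and to the SGD update itself.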

    Integer Fine-tuning of Transformer-based Models

    Transformer-based models are used to achieve state-of-the-art performance on various deep learning tasks. Since transformer-based models have large numbers of parameters, fine-tuning them on downstream tasks is computationally intensive and energy hungry. Automatic mixed-precision FP32/FP16 fine-tuning of such models has previously been used to lower compute resource requirements. However, with recent advances in low-bit integer back-propagation, it is possible to further reduce the computation and memory footprint. In this work, we explore a novel integer training method that uses integer arithmetic for both forward propagation and gradient computation of the linear, convolutional, layer-norm, and embedding layers in transformer-based models. Furthermore, we study the effect of various integer bit-widths to find the minimum required bit-width for integer fine-tuning of transformer-based models. We fine-tune BERT and ViT models on popular downstream tasks using integer layers. We show that 16-bit integer models match the floating-point baseline performance. Reducing the bit-width to 10 yields an average score drop of 0.5 points, and further reducing it to 8 yields an average score drop of 1.7 points.
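    Why narrower bit-widths cost accuracy can be seen from the quantization grid itself: a signed b-bit integer format has 2^(b-1) - 1 positive levels, so the representable resolution, and hence the worst-case representation error, degrades as b shrinks. The sketch below is a generic symmetric per-tensor quantizer, an assumption for illustration rather than the paper's scheme, applied at the three bit-widths the abstract studies.

```python
import numpy as np

def int_quantize(x, bits):
    # Symmetric per-tensor quantization to a signed `bits`-bit integer grid.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

rng = np.random.default_rng(1)
w = rng.standard_normal(10_000)     # stand-in for a transformer weight tensor

# Worst-case round-trip error at each bit-width studied in the abstract.
errors = {}
for bits in (16, 10, 8):
    q, scale = int_quantize(w, bits)
    errors[bits] = np.abs(q * scale - w).max()
```

    The round-trip error grows monotonically as the bit-width falls from 16 to 10 to 8, matching the direction of the reported score drops (0 / 0.5 / 1.7 points), though the mapping from representation error to task score is of course model- and task-dependent.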

    Internet-based control for the intelligent unmanned ground vehicle:

    ABSTRACT: Secure remote access with interoperability for operating a robot can be successfully achieved using the web services provided in the .NET framework. The complete design of the machine discussed in this paper is built on the .NET framework. The server that operates the robot is configured with IIS. The obstacle-detection algorithm is coded on a different server using the .NET framework. By using web services, the robot can be accessed by other servers; these web services are consumed by the server on which the robot executes, where a proxy is created. The whole control is exposed as a series of web pages that can be accessed from any web browser. However, in order to input parameters and control the robot, authentication is required: the user provides credentials that are matched against the existing information in the database, and after authentication the user proceeds to control the robot. The security and reliability of remote access are provided by the components that come with web services, namely SOAP, WSDL, and the proxy.

    Determination of reference values for intraocular pressure and Schirmer tear test in clinically normal ostriches (Struthio camelus)

    The purpose of this study was to establish normal physiologic reference values for intraocular pressure (IOP) and Schirmer tear test (STT) results in clinically normal ostriches (Struthio camelus). Twenty ostriches of both sexes, 10 juveniles (1.5–2 yr of age) and 10 adults, were included in this study. Complete ophthalmic examination was performed prior to this investigation. STT was performed by inserting a standard sterile STT strip over the ventral lid margin into the ventral conjunctival sac for 60 sec. Following the STT, IOP was measured using applanation tonometry with the Tono-Pen Vet™ tonometer after topical instillation of one drop of 0.5% proparacaine ophthalmic solution. The mean ± SD of Tono-Pen IOP readings for all birds was 18.8 ± 3.5 mmHg, with a range of 12–24 mmHg. Mean IOP was 19.7 ± 3.6 mmHg in juvenile ostriches and 16.9 ± 2.9 mmHg in adult ostriches; there was no statistically significant difference between young and adult birds (P = 0.07). The mean STT value in the present study was 16.3 ± 2.5 mm/1 min when measurements from both eyes were averaged. Mean STT in juvenile and adult ostriches was 15.4 ± 1.8 and 17.2 ± 2.9 mm/1 min, respectively, with no statistically significant difference between young and adult birds (P = 0.11). No statistically significant differences between sexes were found for any of the results (P ≥ 0.41). In conclusion, this study provides normal reference range values for STT and IOP in clinically healthy ostriches.

    Effects of nomadic grazing system and indoor concentrate feeding systems on performance, behavior, blood parameters, and meat quality of finishing lambs

    The objective of the study was to evaluate the effects of three production systems on growth performance, behavior, blood parameters, carcass characteristics, and meat quality. A total of 30 lambs (n = 10 lambs/treatment) were randomly assigned to three production systems: nomadic grazing (NG) and two concentrate (CON) feeding levels, one medium (roughage/concentrate ratio 50:50% based on DM, MC) and one high (roughage/concentrate ratio 30:70% based on DM, HC), during the 90-day fattening period. At the start of the experiment, all lambs averaged 90 ± 4 days of age (mean ± SD), and they were slaughtered at an average of 180 ± 3 days (mean ± SD). CON-fed lambs had higher average daily gain and loin thickness than NG lambs. The NG lambs spent more time eating, drinking, and standing, but less time resting and ruminating, than the CON-fed lambs. In addition, plasma lipid, β-hydroxybutyrate, and urea levels were higher, while phosphorus levels were lower, in NG lambs than in CON-fed lambs. CON-fed lambs had better carcass yield, but lower gastrointestinal tract and rumen weights, than NG lambs. CON-fed lambs had higher pH values 0 h post mortem than the NG lambs; however, there was no effect of treatment on pH 24 h post mortem. The post-mortem color of the LD muscle of NG lambs had higher lightness and yellowness indices and a lower redness index than that of CON-fed lambs. The results of this study showed that lambs fed CON had better carcass yield than NG lambs, although feed intake, feed conversion ratio (FCR), growth performance, carcass yield, and meat quality of lambs fed MC and HC were similar.

    On the convergence of stochastic gradient descent in low-precision number formats

    ABSTRACT: Deep learning models dominate almost all artificial intelligence tasks such as vision, text, and speech processing. Stochastic Gradient Descent (SGD) is the main tool for training such models, and its computations are usually performed in single-precision floating-point format. The convergence of single-precision SGD normally aligns with the theoretical results for real numbers, since the numerical error is negligible. However, the numerical error grows when computations are performed in low-precision number formats, which provides compelling reasons to study SGD convergence adapted to low-precision computation. We present both deterministic and stochastic analyses of the SGD algorithm, obtaining bounds that show the effect of the number format. Such bounds can provide guidelines on how SGD convergence is affected when constraints make high-precision computation impractical.